LAFD: Local-differentially Private and Asynchronous Federated Learning with Direct Feedback Alignment
Abstract
Federated learning is a promising approach for training machine learning models on distributed data from multiple mobile devices. However, privacy concerns arise when sensitive data are used for training. In this paper, we discuss the challenges of applying local differential privacy to federated learning, which are compounded by the limited resources of clients and the asynchronicity of the learning process. To address these challenges, we propose a framework called LAFD, which stands for Local-differentially-private and Asynchronous Federated Learning with Direct Feedback Alignment. LAFD consists of two parts: (a) LFL-DFALS: Local-differentially-private Federated Learning with Direct Feedback Alignment and Layer Sampling, and (b) AFL-LMTGR: Asynchronous Federated Learning with Local Model Training and Gradient Rebalancing. LFL-DFALS effectively reduces the computation and communication costs of training via direct feedback alignment and layer sampling. AFL-LMTGR handles the problem of stragglers in model training via local model training and gradient rebalancing. We demonstrate the performance of LAFD through experiments.
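The abstract does not spell out the update rules, so the sketch below only illustrates the two ideas it names: clients training and uploading a sampled subset of layers to cut computation and communication, and a server down-weighting stale updates from stragglers. All function names, the sampling scheme, and the staleness weighting are assumptions for illustration, not the authors' algorithm.

```python
import numpy as np

rng = np.random.default_rng(0)

def sample_layers(num_layers, rate=0.5):
    """Hypothetical layer sampling: each round a client trains and
    uploads only a random subset of layers, shrinking both the local
    computation and the size of the update it must send."""
    mask = rng.random(num_layers) < rate
    mask[-1] = True  # assumption: the output layer is always trained
    return np.flatnonzero(mask)

def rebalance(delta, staleness, alpha=0.5):
    """Hypothetical gradient rebalancing: shrink a straggler's update
    in proportion to how stale its base model was."""
    return delta / (1.0 + alpha * staleness)

def aggregate(global_weights, client_updates):
    """Apply asynchronous client updates as they arrive.
    client_updates: iterable of (layer_ids, deltas, staleness)."""
    for layer_ids, deltas, staleness in client_updates:
        for lid, delta in zip(layer_ids, deltas):
            global_weights[lid] += rebalance(delta, staleness)
    return global_weights
```

Under this reading, stragglers still contribute, but an update computed against an old global model moves the current model less than a fresh one does.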
Similar resources
Differentially Private Local Electricity Markets
Privacy-preserving electricity markets have a key role in steering customers towards participation in local electricity markets by guaranteeing to protect their sensitive information. Moreover, these markets make it possible to statistically release and share the market outputs for social good. This paper aims to design a market for local energy communities by implementing Differential Privacy (DP)...
Differentially Private Federated Learning: A Client Level Perspective
Federated learning is a recent advance in privacy protection. In this context, a trusted curator aggregates parameters optimized in a decentralized fashion by multiple clients. The resulting model is then distributed back to all clients, ultimately converging to a joint representative model without explicitly having to share the data. However, the protocol is vulnerable to differential attacks, w...
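As a rough sketch of the client-level mechanism this abstract describes (not the paper's exact procedure), the curator can bound each client's influence by clipping its update to a fixed norm and then perturb the average with Gaussian noise; the clipping norm and noise multiplier below are placeholder values.

```python
import numpy as np

def dp_average(updates, clip_norm=1.0, noise_mult=1.1, rng=None):
    """Clip each client's update to clip_norm, average the clipped
    updates, and add Gaussian noise scaled to the per-client
    sensitivity of the average (clip_norm / num_clients)."""
    rng = rng or np.random.default_rng()
    clipped = []
    for u in updates:
        norm = np.linalg.norm(u)
        clipped.append(u * min(1.0, clip_norm / max(norm, 1e-12)))
    avg = np.mean(clipped, axis=0)
    sigma = noise_mult * clip_norm / len(updates)
    return avg + rng.normal(0.0, sigma, size=avg.shape)
```

Clipping caps what any single client can contribute, which is what makes the added noise sufficient to hide that client's participation.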
Differentially Private Learning with Kernels
In this paper, we consider the problem of differentially private learning where access to the training features is through a kernel function only. As mentioned in Chaudhuri et al. (2011), the problem seems to be intractable for general kernel functions in the standard learning model of releasing a differentially private predictor. We study this problem in three simpler but practical settings. We first...
Differentially Private Online Learning
In this paper, we consider the problem of preserving privacy in the online learning setting. Online learning involves learning from the data in real-time, so that the learned model, as well as its outputs, is continuously changing. This makes preserving the privacy of each data point significantly more challenging, as its effect on the learned model can be easily tracked by changes in the subseq...
Direct Feedback Alignment Provides Learning in Deep Neural Networks
Artificial neural networks are most commonly trained with the back-propagation algorithm, where the gradient for learning is provided by back-propagating the error, layer by layer, from the output layer to the hidden layers. A recently discovered method called feedback-alignment shows that the weights used for propagating the error backward don’t have to be symmetric with the weights used for p...
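To make the contrast with back-propagation concrete, here is a minimal direct feedback alignment step for a two-hidden-layer network in NumPy: the output error is delivered to each hidden layer through a fixed random matrix rather than through the transposed forward weights. The layer sizes and the tanh nonlinearity are illustrative choices, not anything prescribed by the paper.

```python
import numpy as np

rng = np.random.default_rng(0)
n_in, n_hid, n_out = 8, 16, 4

# forward weights are trained; feedback matrices B1, B2 stay fixed
W1 = rng.normal(0, 0.1, (n_hid, n_in))
W2 = rng.normal(0, 0.1, (n_hid, n_hid))
W3 = rng.normal(0, 0.1, (n_out, n_hid))
B1 = rng.normal(0, 0.1, (n_hid, n_out))
B2 = rng.normal(0, 0.1, (n_hid, n_out))

def dfa_step(x, y, lr=0.01):
    """One direct feedback alignment step: the output error e is sent
    to every hidden layer through a fixed random matrix instead of
    being back-propagated layer by layer."""
    global W1, W2, W3
    h1 = np.tanh(W1 @ x)          # hidden layer 1
    h2 = np.tanh(W2 @ h1)         # hidden layer 2
    e = W3 @ h2 - y               # output error (linear readout)
    d2 = (B2 @ e) * (1 - h2**2)   # direct projection, not W3.T @ e
    d1 = (B1 @ e) * (1 - h1**2)   # skips the layer above entirely
    W3 -= lr * np.outer(e, h2)
    W2 -= lr * np.outer(d2, h1)
    W1 -= lr * np.outer(d1, x)

dfa_step(rng.normal(size=n_in), rng.normal(size=n_out))
```

Because each hidden layer's error signal depends only on e and its own fixed matrix, the layers can be updated independently, which is what makes DFA attractive for the layer sampling used in LAFD.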
Journal
Title: IEEE Access
Year: 2023
ISSN: 2169-3536
DOI: https://doi.org/10.1109/access.2023.3304704